
    CNN-based Landmark Detection in Cardiac CTA Scans

    Fast and accurate anatomical landmark detection can benefit many medical image analysis methods. Here, we propose a method to automatically detect anatomical landmarks in medical images. Automatic landmark detection is performed with a patch-based fully convolutional neural network (FCNN) that combines regression and classification. For any given image patch, regression is used to predict the 3D displacement vector from the image patch to the landmark. Simultaneously, classification is used to identify patches that contain the landmark. Under the assumption that patches close to a landmark can determine the landmark location more precisely than patches farther from it, only those patches that contain the landmark according to classification are used to determine the landmark location. The landmark location is obtained by calculating the average landmark location using the computed 3D displacement vectors. The method is evaluated using detection of six clinically relevant landmarks in coronary CT angiography (CCTA) scans: the right and left ostium, the bifurcation of the left main coronary artery (LM) into the left anterior descending and the left circumflex artery, and the origin of the right, non-coronary, and left aortic valve commissure. The proposed method achieved an average Euclidean distance error of 2.19 mm and 2.88 mm for the right and left ostium, respectively, 3.78 mm for the bifurcation of the LM, and 1.82 mm, 2.10 mm, and 1.89 mm for the origin of the right, non-coronary, and left aortic valve commissure, respectively, demonstrating accurate performance. The proposed combination of regression and classification can be used to accurately detect landmarks in CCTA scans. Comment: This work was submitted to the MIDL 2018 Conference.
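The aggregation step described in this abstract (use only positively classified patches, average their displacement-based candidates) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function and variable names are invented here, and the classifier and regressor outputs are assumed to be precomputed per patch.

```python
import numpy as np

def aggregate_landmark(patch_centers, displacements, contains_landmark):
    """Estimate a landmark location from per-patch FCNN outputs.

    patch_centers, displacements: (N, 3) arrays in scan coordinates.
    contains_landmark: (N,) boolean classification decisions per patch.
    """
    # Each patch votes for a landmark location via its displacement vector.
    candidates = patch_centers + displacements
    # Keep only patches the classifier says contain the landmark.
    positives = candidates[contains_landmark]
    if len(positives) == 0:
        # Fallback (an assumption of this sketch): use all patches.
        positives = candidates
    # The landmark location is the average of the remaining candidates.
    return positives.mean(axis=0)
```

With two agreeing positive patches and one negative outlier, the negative patch is simply excluded from the average.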

    Deep Learning-Based Regression and Classification for Automatic Landmark Localization in Medical Images

    In this study, we propose a fast and accurate method to automatically localize anatomical landmarks in medical images. We employ a global-to-local localization approach using fully convolutional neural networks (FCNNs). First, a global FCNN localizes multiple landmarks through the analysis of image patches, performing regression and classification simultaneously. In regression, displacement vectors pointing from the center of image patches towards landmark locations are determined. In classification, the presence of landmarks of interest in the patch is established. Global landmark locations are obtained by averaging the predicted displacement vectors, where the contribution of each displacement vector is weighted by the posterior classification probability of the patch that it is pointing from. Subsequently, for each landmark localized with global localization, local analysis is performed. Specialized FCNNs refine the global landmark locations by analyzing local sub-images in a similar manner, i.e. by performing regression and classification simultaneously and combining the results. Evaluation was performed through localization of 8 anatomical landmarks in coronary CT angiography (CCTA) scans, 2 landmarks in olfactory MR scans, and 19 landmarks in cephalometric X-rays. We demonstrate that the method performs similarly to a second observer and is able to localize landmarks in a diverse set of medical images, differing in image modality, image dimensionality, and anatomical coverage. Comment: 12 pages, accepted at IEEE Transactions on Medical Imaging.
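The soft variant of aggregation described here, where each patch's vote is weighted by its posterior classification probability rather than gated by a hard decision, can be sketched as below. Names are illustrative; the per-patch posteriors are assumed to come from the classification head of the FCNN.

```python
import numpy as np

def weighted_landmark(patch_centers, displacements, posteriors):
    """Posterior-weighted average of per-patch landmark candidates.

    patch_centers, displacements: (N, 3) arrays; posteriors: (N,) in [0, 1].
    """
    # Each patch's candidate location: its center plus its displacement vector.
    candidates = patch_centers + displacements
    # Normalize posteriors so the weights sum to one.
    weights = posteriors / posteriors.sum()
    # Weighted average: confident patches pull the estimate towards their vote.
    return (weights[:, None] * candidates).sum(axis=0)
```

Compared to the hard gating in the earlier entry, low-confidence patches still contribute, but proportionally less.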

    Generative models for reproducible coronary calcium scoring

    Purpose: Coronary artery calcium (CAC) score, i.e., the amount of CAC quantified in computed tomography (CT), is a strong and independent predictor of coronary heart disease (CHD) events. However, CAC scoring suffers from limited interscan reproducibility, which is mainly due to the clinical definition requiring application of a fixed intensity level threshold for segmentation of calcifications. This limitation is especially pronounced in non-electrocardiogram-synchronized CT, where lesions are more impacted by cardiac motion and partial volume effects. Therefore, we propose a CAC quantification method that does not require a threshold for segmentation of CAC. Approach: Our method utilizes a generative adversarial network (GAN) where a CT with CAC is decomposed into an image without CAC and an image showing only CAC. The method, using a cycle-consistent GAN, was trained using 626 low-dose chest CTs and 514 radiotherapy treatment planning (RTP) CTs. Interscan reproducibility was compared to clinical calcium scoring in RTP CTs of 1662 patients, each having two scans. Results: A lower relative interscan difference in CAC mass was achieved by the proposed method: 47% compared to 89% for manual clinical calcium scoring. The intraclass correlation coefficient of Agatston scores was 0.96 for the proposed method compared to 0.91 for automatic clinical calcium scoring. Conclusions: The increased interscan reproducibility achieved by our method may lead to increased reliability of CHD risk categorization and improved accuracy of CHD event prediction.
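To illustrate the thresholdless quantification idea, the sketch below computes a mass score directly from a hypothetical CAC-only image produced by such a decomposition, with no HU threshold applied. The calibration factor and voxel volume are placeholder values chosen for this example, not the paper's, and the function name is invented.

```python
import numpy as np

def cac_mass_mg(cac_only_hu, voxel_volume_mm3=1.0, calibration=0.001):
    """Illustrative calcium mass (mg) from a CAC-only image in HU.

    Mass follows directly from the decomposed image's intensities,
    so no segmentation threshold is needed.
    """
    # Negative residuals from the generative decomposition carry no calcium.
    positive = np.clip(cac_only_hu, 0, None)
    # Sum of intensities, scaled by voxel volume and a mass calibration factor.
    return positive.sum() * voxel_volume_mm3 * calibration
```

The key contrast with clinical scoring is that nothing here depends on a fixed 130 HU cutoff, which is what makes the clinical definition sensitive to motion and partial volume effects.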

    Automatic segmentation of the olfactory bulbs in MRI

    A decrease in volume of the olfactory bulbs is an early marker for neurodegenerative diseases, such as Parkinson's and Alzheimer's disease. Recently, asymmetric olfactory bulb volumes observed in postmortem MRIs of COVID-19 patients have indicated that the olfactory bulbs might play an important role in the entry of the disease into the central nervous system. Hence, volumetric assessment of the olfactory bulbs can be valuable for various conditions. Given that manual annotation of the olfactory bulbs in MRI to determine their volume is tedious, we propose a method for their automatic segmentation. To mitigate the class imbalance caused by the small volume of the olfactory bulbs, we first localize the center of each olfactory bulb in a scan using convolutional neural networks (CNNs). We use these center locations to extract a bounding box containing both olfactory bulbs. Subsequently, the slices present in the bounding box are analyzed by a segmentation CNN that classifies each voxel as left olfactory bulb, right olfactory bulb, or background. The method achieved median (IQR) Dice coefficients of 0.84 (0.08) and 0.83 (0.08), and average symmetric surface distances of 0.12 (0.08) and 0.13 (0.08) mm for the left and the right olfactory bulb, respectively. Wilcoxon signed-rank tests showed no significant difference between the volumes computed from the reference annotation and the automatic segmentations. Analysis took only 0.20 seconds per scan, and the results indicate that the proposed method could be a first step towards large-scale studies analyzing pathology and morphology of the olfactory bulbs.
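The localization-then-crop step used here to mitigate class imbalance can be sketched as follows. This is a minimal reconstruction assuming the two bulb centers have already been predicted by the localization CNN; the function name and margin parameter are illustrative.

```python
import numpy as np

def crop_around_centers(volume, centers, margin):
    """Extract a bounding box around predicted structure centers.

    volume: 3D array; centers: list of (z, y, x) center coordinates;
    margin: voxels of context to add on each side.
    Returns the cropped sub-volume and its origin in the full volume.
    """
    centers = np.asarray(centers)
    # Tightest box covering all centers, padded by the margin, clipped to the volume.
    lo = np.maximum(centers.min(axis=0) - margin, 0)
    hi = np.minimum(centers.max(axis=0) + margin + 1, volume.shape)
    slices = tuple(slice(int(a), int(b)) for a, b in zip(lo, hi))
    return volume[slices], lo
```

Running the segmentation CNN only inside this box means the background class no longer dominates training and inference the way it would on the full scan.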

    Knowledge distillation with ensembles of convolutional neural networks for medical image segmentation

    Purpose: Ensembles of convolutional neural networks (CNNs) often outperform a single CNN in medical image segmentation tasks, but inference is computationally more expensive, which makes ensembles unattractive for some applications. We compared the performance of differently constructed ensembles with the performance of CNNs derived from these ensembles using knowledge distillation, a technique for reducing the footprint of large models such as ensembles. Approach: We investigated two different types of ensembles, namely, diverse ensembles of networks with three different architectures and two different loss functions, and uniform ensembles of networks with the same architecture but initialized with different random seeds. For each ensemble, additionally, a single student network was trained to mimic the class probabilities predicted by the teacher model, the ensemble. We evaluated the performance of each network, the ensembles, and the corresponding distilled networks across three different publicly available datasets. These included chest computed tomography scans with four annotated organs of interest, brain magnetic resonance imaging (MRI) with six annotated brain structures, and cardiac cine-MRI with three annotated heart structures. Results: Both uniform and diverse ensembles obtained better results than any of the individual networks in the ensemble. Furthermore, applying knowledge distillation resulted in a single network that was smaller and faster than the ensemble it learned from, without compromising performance. The distilled networks significantly outperformed the same network trained with reference segmentations instead of knowledge distillation. Conclusion: Knowledge distillation can compress segmentation ensembles of uniform or diverse composition into a single CNN while maintaining the performance of the ensemble.
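The core distillation objective described here, a student trained to mimic the teacher ensemble's class probabilities, can be sketched with a soft-target cross-entropy. This is a generic distillation formulation in the style of Hinton et al., not the paper's exact loss; the temperature value and function names are illustrative, and the teacher probabilities are assumed to be the mean of the ensemble members' per-voxel softmax outputs.

```python
import numpy as np

def distillation_loss(student_logits, teacher_probs, T=2.0):
    """Cross-entropy between teacher soft targets and temperature-softened
    student predictions; shapes are (n_samples, n_classes)."""
    # Soften the student logits with temperature T, then take a stable softmax.
    z = student_logits / T
    z = z - z.max(axis=-1, keepdims=True)
    student_probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    # Cross-entropy against the teacher's class probabilities, averaged over samples.
    return -(teacher_probs * np.log(student_probs + 1e-12)).sum(axis=-1).mean()
```

A student whose predictions match the teacher's distribution incurs a lower loss than one that contradicts it, which is what drives the student towards the ensemble's behavior.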